Social Justice
I Teach Computer Science, and That Is Not All
"I teach computer science, and that is all," wrote Boaz Barak, of Harvard University, in a recent op-ed in The New York Times.a The main point of the op-ed was to protest the growing politicization of U.S. higher education, especially at elite universities, where we have seen many faculty members proceed from scholarship to advocacy. But in spite of the provocative title, the content of Barak's op-ed is quite more nuanced. "We should not normalize bringing one's ideology to the classroom," wrote Barak, and I could not agree more. But he also wrote that "The interaction of computer science and policy sometimes arises in my classes, and I make sure to present multiple perspectives." Here, Barak is advocating fairness and balance, rather than neutrality and avoidance of non-technical topics.
Agents on the Bench: Large Language Model Based Multi Agent Framework for Trustworthy Digital Justice
The justice system has increasingly employed AI techniques to enhance efficiency, yet limitations remain in improving the quality of decision-making, particularly regarding the transparency and explainability needed to uphold public trust in legal AI. To address these challenges, we propose a large language model (LLM)-based multi-agent framework named AgentsBench, which aims to simultaneously improve both efficiency and quality in judicial decision-making. Our approach leverages multiple LLM-driven agents that simulate the collaborative deliberation and decision-making process of a judicial bench. We conducted experiments on the legal judgment prediction task, and the results show that our framework outperforms existing LLM-based methods in terms of performance and decision quality. By incorporating these elements, our framework reflects real-world judicial processes more closely, enhancing accuracy, fairness, and societal consideration. AgentsBench provides a more nuanced and realistic method of trustworthy AI decision-making, with strong potential for application across various case types and legal scenarios.
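The abstract describes a bench of LLM agents that deliberate in rounds and then reach a collective verdict. As a rough illustration only (this is not the AgentsBench implementation; the agent roles, prompt format, and `llm` stub below are all hypothetical), the deliberate-then-aggregate control flow might look like this:

```python
from dataclasses import dataclass

def llm(prompt: str) -> str:
    # Placeholder for a real LLM call; a keyword stub keeps the sketch runnable.
    return "guilty" if "theft" in prompt else "not guilty"

@dataclass
class JudgeAgent:
    role: str  # e.g., "presiding judge" or "lay assessor" (hypothetical roles)

    def opine(self, case: str, prior_opinions: list[str]) -> str:
        # Each agent sees the case plus the bench's opinions so far.
        context = "\n".join(prior_opinions)
        prompt = f"As {self.role}, case: {case}\nPrior opinions:\n{context}\nVerdict:"
        return llm(prompt)

def deliberate(case: str, bench: list[JudgeAgent], rounds: int = 2) -> str:
    opinions: list[str] = []
    for _ in range(rounds):  # multiple rounds let agents revise after seeing peers
        opinions = [agent.opine(case, opinions) for agent in bench]
    # Aggregate the final round of opinions by majority vote.
    return max(set(opinions), key=opinions.count)

bench = [JudgeAgent("presiding judge"),
         JudgeAgent("associate judge"),
         JudgeAgent("lay assessor")]
verdict = deliberate("defendant accused of theft", bench)
```

The key design point the paper emphasizes, collaborative deliberation rather than a single model's answer, shows up here as the multi-round loop in which each agent conditions on its peers' prior opinions before a final vote.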
The Intersection of Art and Technology: Exploring the World of Digital Creativity
Art and technology have a unique intersection that has been explored in various fields, ranging from film and music to video games and gambling. With the rise of digital technology, art has found a new medium to explore and experiment with. The world of digital creativity has opened up endless possibilities for artists and creatives, allowing them to push the boundaries of what was previously thought possible. One field where this intersection has had a significant impact is gambling. Gambling has evolved from simple card games to a billion-dollar industry with the rise of online casinos and mobile apps. The gambling industry has been quick to embrace technology, and as a result, we have seen an explosion in the number of games and betting options available.
Designing AI: The Feminist Way
"Data is the new oil." Although the expression has found widespread use, it has not been accepted by all as relevant for several reasons. One of these is the fact that while oil is a natural resource destined to run out, the quantity and availability of data is going in the opposite direction, that is, it is constantly increasing. Another is the fact that extrapolating facts is not the same as gaining insights on which to base informed decisions. Ronald Schmelzer talked about this in his article for Forbes [2], where he dismantled Humby's saying through precise reasoning.
A New Vision for A.I.
Anant Madabhushi was ready for the next step in his career as a researcher and educator. He was already widely recognized as a pioneer in the emerging field of machine learning, specifically for medical imaging and computer-assisted diagnoses. He had authored more than 450 peer-reviewed publications and held over one hundred patents in AI, radiomics, computational pathology, and computer vision. He had even seen his name printed in major consumer publications such as Business Insider and Scientific American, which spread the word about how algorithms he created have greatly improved the accuracy of cancer diagnosis. But Madabhushi, a professor of biomedical engineering at Case Western Reserve University, wanted more. He wanted to break out of the lab and share his specialized knowledge of AI with doctors and clinicians who could put it to use in health care systems and hospitals.
Research shows AI is often biased. Here's how to make algorithms work for all of us
Can you imagine a just and equitable world where everyone, regardless of age, gender or class, has access to excellent healthcare, nutritious food and other basic human needs? Are data-driven technologies such as artificial intelligence and data science capable of achieving this – or will the bias that already drives real-world outcomes eventually overtake the digital world, too? Bias represents injustice against a person or a group. A lot of existing human bias can be transferred to machines because technologies are not neutral; they are only as good, or bad, as the people who develop them. To explain how bias can lead to prejudices, injustices and inequality in corporate organizations around the world, I will highlight two real-world examples where bias in artificial intelligence was identified and the ethical risk mitigated.
Data Bias in Machine Learning: Implications for Social Justice
Machine learning and artificial intelligence have taken organizations to new heights of innovation, growth, and profits thanks to their ability to analyze data efficiently and with extreme accuracy. However, some algorithms, such as black-box models, have at times been shown to be unfair and to lack transparency, leading to amplified bias and detrimental impact on minorities. There are several key issues presented by black-box models, and they work together to further bias the data. The most prominent are models fed with data that is historically biased to begin with, and built by humans who are biased by nature. In addition, data analysts can see only the inputs and outputs, not the internal workings that determine the results, even as machine learning systems constantly aggregate this data, including personal data.
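One concrete way to surface the kind of bias described above, even when the model itself is a black box, is to compare outcome rates across demographic groups. A minimal sketch (the loan-approval data and group names are invented for illustration) of the demographic parity gap, a standard fairness check, follows:

```python
# Demographic parity check: compare rates of favorable outcomes (1s)
# between two groups, using only a model's outputs -- no access to its
# internal workings is required.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in favorable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy loan-approval decisions (1 = approved) for two demographic groups.
approvals_group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved (75%)
approvals_group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 3 of 8 approved (37.5%)

gap = demographic_parity_gap(approvals_group_a, approvals_group_b)
# A gap near 0 suggests parity; this toy data shows a 0.375 gap.
```

Because the check operates purely on inputs and outputs, it is one of the few diagnostics available to the analysts the article describes, who cannot inspect how a black-box model reaches its results.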
How Amazon's Moratorium on Facial Recognition Tech Is Different From IBM's and Microsoft's
Just two weeks ago, facial recognition technology seemed unstoppable. At the beginning of this year, for instance, news reports cast a light on the secretive company Clearview AI, which scraped social media sites for photos to build a database of more than 3 billion photos, sold to law enforcement. Then came a sea change: On Monday, in a letter to Congress, IBM announced it would stop the sale of "general purpose" facial recognition software. On Wednesday, Amazon announced a one-year moratorium on law enforcement use of its Rekognition technology, inviting Congress to "put in place stronger regulations to govern the ethical use" of the technology. Amazon in its statement said that "Congress appears ready to take on this challenge," referring to the mounting pressure to make fundamental changes to U.S. law enforcement following the killing of George Floyd by the Minneapolis police, and law enforcement's heavy-handed and violent response to the Black Lives Matter protests.
New humanities-led network will put social justice at the heart of AI research
The humanities-led network will build on research in AI ethics, orienting it around practical issues of social justice, distribution, governance and design, and seek to inform the development of policy and practice. The JUST AI network will connect researchers and practitioners from across philosophy, law, media and communications, human-computer interaction, ethnography, user-centred design, data science, computer sciences and social sciences. It will begin by mapping the landscape and identifying opportunities to join up research across disciplines. It will deliver a programme of activity to include workshops, written and creative outputs and peer-reviewed articles, and lay the foundations for future multidisciplinary research collaborations. Research and debate about the ethical and social risks and impacts of data and AI is often focussed on either abstract principles such as fairness, transparency and rights, or on purely technical approaches.
Artificial Intelligence: Threat or Useful Tool for Social Justice?
Perhaps the most frequent objection to the development of artificial intelligence is the lack of certainty over whether such a powerful innovation will genuinely be good for humanity. Who is responsible if an AI-powered device harms someone? How will we ever know whether AI can behave morally? Where should AI be put to use, and where is it better off left out? These are fundamental questions that yield many different answers from experts in the field of Artificial Intelligence Ethics. This field, which has exploded because of the rapid growth of artificial intelligence, explores the philosophical issues posed by this new technology.